feat(backend): add Z.AI provider support to Graphiti #1371
Base branch: develop
Conversation
Summary of Changes

Hello @mateuszruszkowski, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request expands the flexibility of the Graphiti backend by integrating Z.AI as a new LLM provider. Because Z.AI exposes an OpenAI-compatible API, the system can now use GLM-4 models, giving users more choice. The changes also include a more robust environment-variable loading process, allowing easier project-specific configuration, and documentation to guide setup.

Highlights
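For orientation, selecting the new provider comes down to a few environment variables. This fragment is only illustrative: the variable names appear in this PR, but the values are placeholders, not confirmed defaults.

```
GRAPHITI_LLM_PROVIDER=zai
ZAI_API_KEY=zai-xxxxxxxx...
ZAI_BASE_URL=https://example-zai-endpoint/v1  # placeholder, not a real endpoint
ZAI_MODEL=glm-4                               # illustrative GLM-4 model name
```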
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in its review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds Z.AI provider support and an optional OpenAI base URL: updates the env examples and env loading precedence, extends the Graphiti config and provider enum, wires up the factory, updates package exports, adds a new ZAI provider module, and adds conditional base_url support for the LLM and embedder clients.

Changes
Sequence Diagram

sequenceDiagram
actor App
participant EnvLoader as Environment Loader
participant Config as GraphitiConfig
participant Factory as LLM Factory
participant ZAIModule as ZAI Provider
participant OpenAIClient as OpenAI Client
App->>EnvLoader: setup_environment()
EnvLoader->>EnvLoader: Load cwd/.env (if exists)
EnvLoader->>EnvLoader: Load script_dir/.env
App->>Config: GraphitiConfig.from_env()
Config->>Config: Read OPENAI_BASE_URL, ZAI_API_KEY, ZAI_BASE_URL, ZAI_MODEL
Config-->>App: config with zai_* and openai_base_url
App->>Factory: create_llm_client("zai", config)
Factory->>ZAIModule: create_zai_llm_client(config)
ZAIModule->>ZAIModule: Validate zai_api_key & zai_base_url
ZAIModule->>ZAIModule: Import graphiti-core deps
ZAIModule->>ZAIModule: Build LLMConfig(api_key, model, base_url)
ZAIModule->>OpenAIClient: OpenAIClient(LLMConfig)
OpenAIClient-->>ZAIModule: client instance
ZAIModule-->>Factory: client instance
Factory-->>App: OpenAI-compatible client
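Expressed as code, the flow above might look like the sketch below. It is reconstructed from the diagram and the commit messages, not copied from the PR; the GraphitiConfig field names and the graphiti-core import path are assumptions.

```python
from pathlib import Path

from dotenv import load_dotenv
from graphiti_core.llm_client import LLMConfig, OpenAIClient


def setup_environment(script_dir: Path) -> None:
    # load_dotenv() does not override variables that are already set, so
    # loading cwd/.env first gives the project-local file precedence.
    cwd_env = Path.cwd() / ".env"
    if cwd_env.exists():
        load_dotenv(cwd_env)
    load_dotenv(script_dir / ".env")


def create_zai_llm_client(config) -> OpenAIClient:
    # Both settings are required; failing early beats a confusing API error.
    if not (config.zai_api_key and config.zai_base_url):
        raise ValueError("ZAI_API_KEY and ZAI_BASE_URL must both be set")

    llm_config = LLMConfig(
        api_key=config.zai_api_key,
        model=config.zai_model,
        base_url=config.zai_base_url,  # Z.AI's OpenAI-compatible endpoint
    )
    # Z.AI speaks the OpenAI wire format, so the stock client is reused.
    return OpenAIClient(config=llm_config)
```

Reusing the OpenAI client keeps the new provider thin: only configuration and validation are Z.AI-specific.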
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 3 checks passed
Codecov Report: ❌ Patch coverage is …
Code Review
This pull request introduces support for the Z.AI LLM provider, enhancing Graphiti's multi-provider capabilities. It includes necessary configuration fields, environment variable updates, and a new client implementation that leverages the OpenAI-compatible API. Additionally, the environment loading mechanism has been improved to prioritize project-specific .env files. The changes are well-structured and integrate smoothly with the existing provider framework. I've identified a couple of areas for improvement regarding provider availability checks and model capability handling for the new Z.AI client.
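On the availability-check point, the gist (later fixed in a follow-up commit below) is that the zai provider needs both of its settings before it can be reported as usable. The function and field names here are assumptions:

```python
def zai_is_available(config) -> bool:
    # Checking only the API key would report the provider as available
    # even when no endpoint is configured; both values are required.
    return bool(config.zai_api_key and config.zai_base_url)
```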
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around lines 357-365: Clarify the Z.AI integration choices in .env.example by
adding a short explanatory comment near the existing Example 1b, or by adding a
new example block showing the dedicated Z.AI provider. Explain the difference
between using GRAPHITI_LLM_PROVIDER=openai with OPENAI_BASE_URL (a custom
OpenAI-compatible endpoint) and using GRAPHITI_LLM_PROVIDER=zai (native Z.AI
handling). Include the corresponding env variables for each approach, e.g.
OPENAI_API_KEY/OPENAI_BASE_URL/OPENAI_MODEL with
GRAPHITI_EMBEDDER_PROVIDER=voyage for the first, and GRAPHITI_LLM_PROVIDER=zai
with VOYAGE_API_KEY for the second, so users know when to pick each approach
and which variables to set. A sketch of the first approach follows this comment.
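As a sketch, Example 1b could look like the fragment below (the dedicated-provider variant mirrors the fragment shown earlier). The variable names come from this comment, while the URL and model values are placeholders:

```
# Example 1b: Z.AI via the generic OpenAI provider (custom endpoint)
GRAPHITI_LLM_PROVIDER=openai
OPENAI_API_KEY=sk-xxxxxxxx...
OPENAI_BASE_URL=https://example-zai-endpoint/v1  # placeholder URL
OPENAI_MODEL=glm-4                               # illustrative model name
GRAPHITI_EMBEDDER_PROVIDER=voyage
VOYAGE_API_KEY=pa-xxxxxxxx...
```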
In `@apps/backend/integrations/graphiti/providers_pkg/llm_providers/zai_llm.py`:
- Around lines 54-56: Remove the dead/orphan comment about determining whether
the model supports reasoning that sits above `return
OpenAIClient(config=llm_config)` in zai_llm.py. Either delete the comment
entirely or replace it with real reasoning-detection logic that inspects
llm_config (or the chosen model identifier) and sets a flag or calls a function
(e.g., a detect_reasoning_capability helper) before returning the OpenAIClient,
updating any related variables instead of leaving an unused comment.
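A follow-up commit below takes the second route and implements reasoning detection for GLM-4 models. Something in this spirit would satisfy the comment; detect_reasoning_capability is the hypothetical helper named above:

```python
def detect_reasoning_capability(model: str | None) -> bool:
    # Per the follow-up commit: GLM-4 models support reasoning.
    return bool(model and model.lower().startswith("glm-4"))
```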
Force-pushed 4ed851d to 62096db
Force-pushed 0a1cce3 to 03aefab
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around lines 282-299: Update the Z.AI environment-variable placeholders to
match the project's established convention (e.g., a masked key like sk-xxxxxxxx
or pa-xxxxxxxx) instead of "your_key_here". Specifically, change the
ZAI_API_KEY placeholder, and any example values for ZAI_BASE_URL and ZAI_MODEL,
to the consistent masked format with explanatory comments, so they match other
variables such as OPENAI_API_KEY and PA_API_KEY in the file.
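The eventual fix (committed below) is a one-line placeholder change along these lines:

```
# Before
ZAI_API_KEY=your_key_here
# After
ZAI_API_KEY=zai-xxxxxxxx...
```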
The previous fix only removed partial directories but not broken symlinks. CI logs showed that an existing broken symlink pointing to "../../../../../../apps/frontend/node_modules" was skipped, causing electron-builder to fail with ENOENT during macOS code signing.

Changes:
- Check whether the existing symlink points to the correct target (../../node_modules)
- Remove incorrect or broken symlinks before creating a new one
- Add strict verification that the symlink exists AND resolves correctly
- Fail fast with a clear error if verification fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Windows junctions don't appear as symlinks to bash's -L test, causing the verification step to fail even when the junction was created successfully.

Changes:
- Skip the symlink check (-L) on Windows, since junctions are different
- Verify that the link works by checking that the electron package is accessible
- Add more diagnostic output on failure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix get_available_providers to check both zai_api_key AND zai_base_url
- Implement reasoning detection for the Z.AI LLM client (GLM-4 models support reasoning)
- Pass openai_base_url to the OpenAI LLM client for custom endpoint support
- Pass openai_base_url to the OpenAI embedder client for custom endpoint support
- Clarify .env.example with two Z.AI integration approaches:
  - Via the OpenAI provider with a custom base URL (Example 1b)
  - Via the dedicated zai provider (Example 1c)

Addresses review comments from:
- Gemini Code Assist (config.py, zai_llm.py)
- CodeRabbit (.env.example, zai_llm.py)
- Sentry (HIGH severity bug: openai_base_url not passed to clients)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Z.AI uses its own parameter names (e.g., 'thinking') and doesn't support OpenAI's 'reasoning' or 'verbosity' parameters. Always disable them for compatibility, similar to how the OpenRouter provider handles this.

Addresses: Sentry review comment about incompatible parameters

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
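A minimal sketch of that guard, assuming the OpenAI-style options arrive as keyword arguments (the actual plumbing in graphiti-core may differ):

```python
def strip_openai_only_params(request_kwargs: dict) -> dict:
    # Z.AI has its own 'thinking' parameter and rejects OpenAI's
    # 'reasoning' and 'verbosity' options, so both are always dropped.
    return {
        key: value
        for key, value in request_kwargs.items()
        if key not in ("reasoning", "verbosity")
    }
```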
Changed the placeholder from 'your_key_here' to 'zai-xxxxxxxx...' to match the convention used by other API keys in the file.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed b532a3b to 34a3993
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/backend/integrations/graphiti/config.py (1)
10-12: Update the docstring to include the Z.AI provider. The module docstring lists supported providers but doesn't include Z.AI.
📝 Proposed fix
 Multi-Provider Support (V2):
-- LLM Providers: OpenAI, Anthropic, Azure OpenAI, Ollama, Google AI, OpenRouter
+- LLM Providers: OpenAI, Anthropic, Azure OpenAI, Ollama, Google AI, OpenRouter, Z.AI
 - Embedder Providers: OpenAI, Voyage AI, Azure OpenAI, Ollama, Google AI, OpenRouter
🤖 Fix all issues with AI agents
In `@apps/backend/.env.example`:
- Around lines 282-299: Update the LLM provider selection line in .env.example
to include "zai" so readers know they can choose Z.AI. Make sure this aligns
with the existing ZAI_* entries (ZAI_API_KEY, ZAI_BASE_URL, ZAI_MODEL), and add
"zai" to the comma-separated provider list or enum that documents the available
providers (the same line that lists openai, anthropic, etc.), keeping the
formatting consistent with the surrounding comments.
Summary

- Adds Z.AI to the LLMProvider enum and config fields
- Documents Z.AI configuration in .env.example

Files Changed

- apps/backend/.env.example - Z.AI configuration documentation
- apps/backend/cli/utils.py - Improved env loading (CWD first)
- apps/backend/integrations/graphiti/config.py - Z.AI config fields and provider enum
- apps/backend/integrations/graphiti/providers_pkg/factory.py - Z.AI routing
- apps/backend/integrations/graphiti/providers_pkg/llm_providers/__init__.py - Export
- apps/backend/integrations/graphiti/providers_pkg/llm_providers/zai_llm.py - New Z.AI provider

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Chores